Search results for "speech synthesis"
Showing 8 of 8 documents
Linguistic interpretation of speech errors
2016
This paper is an attempt to illustrate the linguistic interpretation of speech, a problem that remains insufficiently resolved, especially for Romanian. This is due to the multitude of criteria that can or should be considered important in speech processing. The aim of this study is to develop a computational tool to identify possible errors related to the morphosyntactic structure of speech. Our goal is to assist users, who can automatically receive suggestions that help them improve the quality of their text. Thus, we chose an interdisciplinary approach through speech analysis that brings together the key fields of linguistics, computer science and so on…
Cognitive factors in the evaluation of synthetic speech
1998
Abstract This paper illustrates the importance of various cognitive factors involved in perceiving and comprehending synthetic speech. It includes findings drawn from the relevant psychological and psycholinguistic literature, together with experimental results obtained at the Fondazione Ugo Bordoni laboratory. Overall, it is shown that listening to and comprehending synthetic voices is more difficult than with a natural voice. However, and more importantly, this difficulty can and does decrease with the subjects' exposure to said synthetic voices. Furthermore, greater workload demands are associated with synthetic speech, and subjects listening to synthetic passages are required to pay more …
A Web-Oriented Java3D Talking Head
2009
Facial animation denotes all those systems performing speech synchronization with an animated face model. These kinds of systems are named Talking Heads or Talking Faces. At the same time, simple dialogue systems called chatbots have been developed. Chatbots are software agents able to interact with users through pattern-matching based rules. In this paper, a Talking Head oriented to the creation of a Chatbot is presented. An answer is generated in the form of text, triggered by an input query. The answer is converted into a facial animation using a 3D face model whose lip movements are synchronized with the sound produced by a speech synthesis module. Our Talking Head exploits the naturalness…
A Java 3D Talking Head for a Chatbot
2008
Facial animation refers to all those systems performing speech synchronization with an animated face model. These kinds of systems are called "Talking Heads" or "Talking Faces". In this paper, a Talking Head oriented to the creation of a Chatbot is presented. It requires an input query, and an answer is generated in the form of text. The answer is transduced into a facial animation using a 3D face model whose lip movements are synchronized with the sound produced by a speech synthesis module. Our "Talking Head" explores the naturalness of the facial animation and provides a real-time interactive interface to the user. The WEB infrastructure has been realized using the Client-Server m…
Virtual conversation with a real talking head
2008
A talking head is a system presenting an animated face model synchronized with a speech synthesis module. It is used as the presentation layer of a conversational agent, which provides an answer when the user types a query as input. The textual answer is converted into facial movements of a 3D face model whose lip and tongue movements are synchronized with the sound of the synthetic voice. The Client-Server paradigm has been used for the WEB infrastructure, delegating the animation and synchronization to the client so that the server can satisfy multiple requests from clients, while the Chatbot, the Digital Signal Processing and the Natural Language Processing a…
An Emotional Talking Head for a Humoristic Chatbot
2011
Interest in enhancing the interface usability of applications and entertainment platforms has increased in recent years. Research in human-computer interaction on conversational agents, also named chatbots, and natural language dialogue systems equipped with audio-video interfaces has grown as well. One of the most pursued goals is to enhance the realism of interaction of such systems. For this reason, they are provided with catchy interfaces using humanlike avatars capable of adapting their behavior according to the conversation content. This kind of agent can vocally interact with users by using Automatic Speech Recognition (ASR) and Text To Speech (TTS) systems; besides, they can c…
The Art of Language in the Era of Panophonia
2018
This article calls attention to the hybrid genre of voice-based performances and its blending of the supposed binaries of human and machinic speech. Using the concept of panophonia, the author refers to the animatronic sculptures of speaking figures created by Ken Feingold and to Mark Böhlen's talking robots. Through their comparative analysis, the author explores the different poetic metalanguages both artists create to deconstruct the communicative structures that demarcate the post-human era.
A multimodal chat-bot based information technology system
2006
The proposed system integrates chat-bot and speech recognition technologies in order to build a versatile, user-friendly, virtual assistant guide with information retrieval capabilities. The system adapts to users' mobility needs, being usable on different devices (e.g. PDAs, smartphones). The system has been implemented on a Qtek 9090 with Windows Mobile 2003, and a simulation for the cultural heritage domain is presented here.